
    On detecting harmonic oscillations

    In this paper, we focus on the following testing problem: assume that we are given observations of a real-valued signal along the grid $0,1,\ldots,N-1$, corrupted by white Gaussian noise. We want to distinguish between two hypotheses: (a) the signal is a nuisance, that is, a linear combination of $d_n$ harmonic oscillations of known frequencies, and (b) the signal is the sum of a nuisance and a linear combination of a given number $d_s$ of harmonic oscillations with unknown frequencies, such that the distance (measured in the uniform norm on the grid) between the signal and the set of nuisances is at least $\rho>0$. We propose a computationally efficient test for distinguishing between (a) and (b) and show that its "resolution" (the smallest value of $\rho$ for which (a) and (b) are distinguished with a given confidence $1-\alpha$) is $\mathrm{O}(\sqrt{\ln(N/\alpha)/N})$, with the hidden factor depending solely on $d_n$ and $d_s$ and independent of the frequencies in question. We show that this resolution, up to a factor which is polynomial in $d_n, d_s$ and logarithmic in $N$, is the best possible under the circumstances. We further extend the outlined results to the case of nuisances and signals close to linear combinations of harmonic oscillations, and provide illustrative numerical results.
    Comment: Published at http://dx.doi.org/10.3150/14-BEJ600 in Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm)
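
    Below is a minimal numerical sketch of the two hypotheses. It is not the paper's test: for illustration it uses a classical detector (project out the known nuisance frequencies, then scan the periodogram of the residual), and all frequencies, amplitudes, and the grid size are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 512
t = np.arange(N)
sigma = 1.0

# Hypothetical parameters for illustration: one known nuisance frequency
# (d_n = 1) and one unknown signal frequency (d_s = 1).
nu_nuisance = 0.05   # cycles per sample, known to the detector
nu_signal = 0.173    # cycles per sample, unknown to the detector
rho = 0.5            # amplitude of the hidden oscillation

def observe(with_signal):
    x = 2.0 * np.cos(2 * np.pi * nu_nuisance * t)        # nuisance component
    if with_signal:
        x = x + rho * np.cos(2 * np.pi * nu_signal * t)  # hidden oscillation
    return x + sigma * rng.standard_normal(N)

# Project out the span of the known nuisance oscillation, then scan the
# periodogram of the residual.
B = np.column_stack([np.cos(2 * np.pi * nu_nuisance * t),
                     np.sin(2 * np.pi * nu_nuisance * t)])
P = np.eye(N) - B @ np.linalg.pinv(B)

def test_statistic(y):
    return np.abs(np.fft.rfft(P @ y)).max() / np.sqrt(N)

for label, y in [("(a) nuisance only    ", observe(False)),
                 ("(b) nuisance + signal", observe(True))]:
    print(label, "statistic =", round(test_statistic(y), 3))
```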

    Near-Optimal Recovery of Linear and N-Convex Functions on Unions of Convex Sets

    In this paper we build provably near-optimal, in the minimax sense, estimates of linear forms and, more generally, of "$N$-convex functionals" (the simplest example being the maximum of several fractional-linear functions) of an unknown "signal" known to belong to the union of finitely many convex compact sets, from indirect noisy observations of the signal. Our main assumption is that the observation scheme in question is good in the sense of A. Goldenshluger, A. Juditsky, A. Nemirovski, Electr. J. Stat. 9(2) (2015), arXiv:1311.6765, the simplest example being the Gaussian scheme, where the observation is the sum of a linear image of the signal and standard Gaussian noise. The proposed estimates, as well as the upper bounds on their worst-case risks, stem from solutions to explicit convex optimization problems, making the estimates "computation-friendly."
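
    The sketch below only sets up the Gaussian scheme named in the abstract, with $X$ taken to be a box for simplicity, and computes a naive plug-in baseline: constrained least squares followed by evaluation of the linear form. The paper's provably near-optimal estimates are different constructions coming from explicit convex programs; all problem data below are made up.

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(1)

# Gaussian observation scheme: omega = A x + xi, xi ~ N(0, sigma^2 I),
# with the signal x known to lie in the box X = [-1, 1]^n (made-up instance).
n, m, sigma = 10, 30, 0.1
A = rng.standard_normal((m, n))
g = rng.standard_normal(n)
x_true = rng.uniform(-1.0, 1.0, n)
omega = A @ x_true + sigma * rng.standard_normal(m)

# Naive plug-in baseline: project the data onto X by constrained least
# squares, then evaluate the linear form at the projection.
res = lsq_linear(A, omega, bounds=(-1.0, 1.0))
print("g^T x, true:     ", g @ x_true)
print("g^T x, estimated:", g @ res.x)
```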

    Solving Variational Inequalities with Monotone Operators on Domains Given by Linear Minimization Oracles

    The standard algorithms for solving large-scale convex-concave saddle point problems, or, more generally, variational inequalities with monotone operators, are proximal-type algorithms which at every iteration need to compute a prox-mapping, that is, to minimize over the problem's domain $X$ the sum of a linear form and the specific convex distance-generating function underlying the algorithm in question. Relative computational simplicity of prox-mappings, which is the standard requirement when implementing proximal algorithms, clearly implies the possibility to equip $X$ with a relatively computationally cheap Linear Minimization Oracle (LMO) able to minimize linear forms over $X$. There are, however, important situations where a cheap LMO indeed is available, but no proximal setup with easy-to-compute prox-mappings is known. This fact motivates our goal in this paper, which is to develop techniques for solving variational inequalities with monotone operators on domains given by Linear Minimization Oracles. The techniques we develop can be viewed as a substantial extension of the method proposed in [5] for nonsmooth convex minimization over an LMO-represented domain.
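
    To make the distinction drawn in the abstract concrete, the toy sketch below implements both oracles over the probability simplex, where each is cheap and available in closed form. On domains such as the nuclear-norm ball, the LMO stays cheap (one leading singular-vector pair) while no easy-to-compute prox-mapping is known; that is the situation motivating the paper. This is an illustration of the two oracle types, not the paper's technique.

```python
import numpy as np

def lmo_simplex(c):
    """Linear Minimization Oracle: argmin of <c, x> over the simplex is a vertex."""
    x = np.zeros_like(c)
    x[np.argmin(c)] = 1.0
    return x

def prox_simplex_entropy(x, c, step):
    """Prox-mapping for the entropy distance-generating function:
    argmin over the simplex of <step*c, u> + KL(u, x), in closed form."""
    w = x * np.exp(-step * c)
    return w / w.sum()

c = np.array([0.3, -1.2, 0.5, 0.1])
print("LMO output: ", lmo_simplex(c))
print("prox output:", prox_simplex_entropy(np.full(4, 0.25), c, step=1.0))
```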

    Non-asymptotic confidence bounds for the optimal value of a stochastic program

    We discuss a general approach to building non-asymptotic confidence bounds for stochastic optimization problems. Our principal contribution is the observation that a Sample Average Approximation of a problem supplies upper and lower bounds for the optimal value of the problem which are essentially better than the quality of the corresponding optimal solutions. At the same time, such bounds are more reliable than "standard" confidence bounds obtained through the asymptotic approach. We also discuss bounding the optimal value of MinMax Stochastic Optimization and of stochastically constrained problems. We conclude with a simulation study illustrating the numerical behavior of the proposed bounds.
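
    As a point of reference, here is the classical replication scheme for bracketing the optimal value of a toy stochastic program: replicated SAA optimal values give a statistical lower bound (the expected SAA value is biased downward), and evaluating a fixed feasible point on a fresh sample gives an upper bound. The paper's non-asymptotic bounds are a different construction; the toy problem and all sample sizes below are made up.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stochastic program: min over x in [-1, 1] of E[(x - xi)^2], xi ~ N(0.3, 1).
# True optimum: x* = 0.3, v* = Var(xi) = 1.
def saa_solve(sample):
    x = np.clip(sample.mean(), -1.0, 1.0)   # SAA minimizer, in closed form here
    return x, ((x - sample) ** 2).mean()    # minimizer and SAA optimal value

M, N_saa = 20, 200
vals = np.array([saa_solve(rng.normal(0.3, 1.0, N_saa))[1] for _ in range(M)])

# Lower bound: E[SAA value] <= v*, so average M replicated SAA values
# (normal approximation to the confidence interval).
lower = vals.mean() - 1.96 * vals.std(ddof=1) / np.sqrt(M)

# Upper bound: evaluate one SAA solution on a large fresh sample.
x_hat, _ = saa_solve(rng.normal(0.3, 1.0, N_saa))
fresh = rng.normal(0.3, 1.0, 10_000)
f_vals = (x_hat - fresh) ** 2
upper = f_vals.mean() + 1.96 * f_vals.std(ddof=1) / np.sqrt(fresh.size)

print(f"approx. 95% bracket for v*: [{lower:.3f}, {upper:.3f}]   (v* = 1)")
```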

    Decomposition Techniques for Bilinear Saddle Point Problems and Variational Inequalities with Affine Monotone Operators on Domains Given by Linear Minimization Oracles

    The majority of first-order methods for large-scale convex-concave saddle point problems and variational inequalities with monotone operators are proximal algorithms which at every iteration need to minimize over the problem's domain $X$ the sum of a linear form and a strongly convex function. To make such an algorithm practical, $X$ should be proximal-friendly, that is, admit a strongly convex function with easy-to-minimize linear perturbations. As a byproduct, such an $X$ admits a computationally cheap Linear Minimization Oracle (LMO) capable of minimizing linear forms over $X$. There are, however, important situations where a cheap LMO indeed is available, but $X$ is not proximal-friendly, which motivates the search for algorithms based solely on LMOs. For smooth convex minimization, there exists a classical LMO-based algorithm, Conditional Gradient. In contrast, the LMO-based techniques known to us for other problems with convex structure (nonsmooth convex minimization, convex-concave saddle point problems, even ones as simple as bilinear, and variational inequalities with monotone operators, even ones as simple as affine) are quite recent and utilize a common approach based on Fenchel-type representations of the associated objectives/vector fields. The goal of this paper is to develop alternative (and seemingly much simpler) LMO-based decomposition techniques for bilinear saddle point problems and for variational inequalities with affine monotone operators.
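
    For concreteness, here is the classical LMO-based algorithm the abstract refers to, Conditional Gradient (Frank-Wolfe), run on a made-up smooth least-squares problem over the simplex. It is included only to show the LMO access pattern; the paper's decomposition techniques are a different construction.

```python
import numpy as np

rng = np.random.default_rng(3)

# min f(x) = 0.5 * ||A x - b||^2 over the probability simplex (made-up data).
m, n = 40, 60
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

def lmo(c):
    """The only access to the domain: minimize a linear form, returning a vertex."""
    v = np.zeros_like(c)
    v[np.argmin(c)] = 1.0
    return v

x = np.full(n, 1.0 / n)
for k in range(500):
    grad = A.T @ (A @ x - b)        # gradient of f at the current iterate
    v = lmo(grad)                   # minimize the linearized objective over X
    x += 2.0 / (k + 2) * (v - x)    # standard step size gamma_k = 2/(k+2)

print("final objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2)
```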

    Nonparametric estimation by convex programming

    The problem we concentrate on is as follows: given (1) a convex compact set $X$ in ${\mathbb{R}}^n$, an affine mapping $x\mapsto A(x)$, a parametric family $\{p_{\mu}(\cdot)\}$ of probability densities, and (2) $N$ i.i.d. observations of the random variable $\omega$, distributed with the density $p_{A(x)}(\cdot)$ for some (unknown) $x\in X$, estimate the value $g^Tx$ of a given linear form at $x$. For several families $\{p_{\mu}(\cdot)\}$, with no additional assumptions on $X$ and $A$, we develop computationally efficient estimation routines which are minimax optimal within an absolute constant factor. We then apply these routines to recovering $x$ itself in the Euclidean norm.
    Comment: Published at http://dx.doi.org/10.1214/08-AOS654 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
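
    The sketch below sets up one plausible instance of this observation scheme, a Poisson family with componentwise mean $A x$ (an assumption made here for illustration), and compares $g^Tx$ with a naive plug-in estimate obtained by constrained maximum likelihood. The paper's minimax-optimal routines are different convex programs; all problem data below are made up.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)

# Made-up instance: x in X = [0, 1]^n, affine mapping A(x) = A x with positive
# entries, and each coordinate of omega Poisson with mean (A x)_k.
n, m, N = 5, 12, 50
A = rng.uniform(0.5, 2.0, (m, n))
g = rng.standard_normal(n)
x_true = rng.uniform(0.2, 0.8, n)
omega = rng.poisson(A @ x_true, size=(N, m))

# Naive plug-in baseline: constrained maximum likelihood over X, then g^T x_hat.
s = omega.mean(axis=0)                 # per-coordinate sample means
def neg_loglik(x):
    mu = A @ x
    return (mu - s * np.log(mu)).sum() # averaged Poisson negative log-likelihood

res = minimize(neg_loglik, x0=np.full(n, 0.5), bounds=[(1e-6, 1.0)] * n)
print("g^T x, true:     ", g @ x_true)
print("g^T x, estimated:", g @ res.x)
```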